Neural Networks
01. Introducing Luis
02. Why "Neural Networks"?
03. Neural Network Architecture
04. Feedforward
05. Backpropagation
06. Training Optimization
07. Testing
08. Overfitting and Underfitting
09. Early Stopping
10. Regularization
11. Regularization 2
12. Dropout
13. Local Minima
14. Vanishing Gradient
15. Other Activation Functions
16. Batch vs Stochastic Gradient Descent
17. Learning Rate Decay
18. Random Restart
19. Momentum
10. Regularization

Video: DL 53 Q Regularization
Which of the following two linear models gives a smaller error?
x1 + x2
10x1 + 10x2
SOLUTION:

10x1 + 10x2

When the score is passed through a sigmoid, multiplying both weights by 10 pushes the predictions much closer to 0 and 1, so the error is smaller. That certainty is actually a problem: the large-weight model is overconfident, and taming large weights like these is exactly what regularization is for.
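The arithmetic is easy to check. Below is a minimal sketch assuming the setup from the video: two points, (1, 1) with label 1 and (-1, -1) with label 0, sigmoid activation, and cross-entropy error. These specifics are assumptions, since the quiz text does not restate them.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def cross_entropy(y, y_hat):
    # Error contributed by a point with true label y and prediction y_hat.
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Assumed setup (from the video, not restated in the quiz text):
# point (1, 1) has label 1, point (-1, -1) has label 0.
points = np.array([[1.0, 1.0], [-1.0, -1.0]])
labels = np.array([1.0, 0.0])

for w in (1.0, 10.0):  # compare the models x1 + x2 and 10*x1 + 10*x2
    scores = points @ np.array([w, w])   # score = w*x1 + w*x2
    preds = sigmoid(scores)              # prediction = sigmoid(score)
    error = cross_entropy(labels, preds).sum()
    print(f"w = {w:>4}: predictions = {preds}, total error = {error:.3g}")
```

The large-weight model wins on error, but its sigmoid is so steep that the gradients are nearly zero away from the boundary, which makes further training very hard. Penalizing large weights to avoid this overconfidence is the idea behind regularization.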